
Search in the Catalogues and Directories

Hits 1 – 20 of 163 (page 1 of 9)

1. Probing for the Usage of Grammatical Number ... (BASE)
2. Estimating the Entropy of Linguistic Distributions ... (BASE)
3. A Latent-Variable Model for Intrinsic Probing ... (BASE)
4. On Homophony and Rényi Entropy ... (BASE)
5. On Homophony and Rényi Entropy ... (BASE)
6. On Homophony and Rényi Entropy ... (BASE)
7. Towards Zero-shot Language Modeling ... (BASE)
8. Differentiable Generative Phonology ... (BASE)
9. Finding Concept-specific Biases in Form–Meaning Associations ... (BASE)
10. Searching for Search Errors in Neural Morphological Inflection ... (BASE)
11. Applying the Transformer to Character-level Transduction ... (BASE)
Wu, Shijie; Cotterell, Ryan; Hulden, Mans. ETH Zurich, 2021.
12. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ... (BASE)
13. Probing as Quantifying Inductive Bias ... (BASE)
14. Revisiting the Uniform Information Density Hypothesis ... (BASE)
15. Revisiting the Uniform Information Density Hypothesis ... (BASE)
16. Conditional Poisson Stochastic Beams ... (BASE)
17. Examining the Inductive Bias of Neural Language Models with Artificial Languages ... (BASE)
18. Modeling the Unigram Distribution ... (BASE)
19. Language Model Evaluation Beyond Perplexity ... (BASE)
Abstract: We propose an alternate approach to quantifying how well language models learn natural language: we ask how well they match the statistical tendencies of natural language. To answer this question, we analyze whether text generated from language models exhibits the statistical tendencies present in the human-generated text on which they were trained. We provide a framework, paired with significance tests, for evaluating the fit of language models to these trends. We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than with proposed theoretical distributions (when present). Further, the fit to different distributions is highly dependent on both model architecture and generation strategy. As concrete examples, text generated under the nucleus sampling scheme adheres more closely to the type–token relationship of natural language than text produced using ... (An illustrative sketch of this type–token comparison follows after this list.)
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
Read paper: https://www.aclanthology.org/2021.acl-long.414
URL: https://underline.io/lecture/25838-language-model-evaluation-beyond-perplexity
DOI: https://dx.doi.org/10.48448/jr48-6p89
20. Differentiable Subset Pruning of Transformer Heads ... (BASE)
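As a rough illustration of the type–token comparison mentioned in the abstract of record 19 (not the paper's actual framework or its significance tests), the following minimal Python sketch estimates how vocabulary size grows with text length (Heaps' law) for a human-written corpus and a model-generated one. The file names are hypothetical placeholders, and tokenization is plain whitespace splitting.

# Illustrative sketch only, not the framework from the paper above:
# compare type-token growth (Heaps' law) of human-written text
# against model-generated text. File names are hypothetical.

import math


def type_token_curve(tokens, step=1000):
    """Return (tokens_seen, distinct_types) pairs sampled every `step` tokens."""
    seen = set()
    curve = []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        if i % step == 0:
            curve.append((i, len(seen)))
    return curve


def heaps_exponent(curve):
    """Crude log-log slope estimate for Heaps' law V ~ k * N**beta."""
    if len(curve) < 2:
        return float("nan")
    (n1, v1), (n2, v2) = curve[0], curve[-1]
    return (math.log(v2) - math.log(v1)) / (math.log(n2) - math.log(n1))


if __name__ == "__main__":
    # 'human.txt' and 'generated.txt' are placeholder corpora,
    # whitespace-tokenized here for simplicity.
    for name in ("human.txt", "generated.txt"):
        with open(name, encoding="utf-8") as fh:
            tokens = fh.read().split()
        exponent = heaps_exponent(type_token_curve(tokens))
        print(name, "Heaps exponent ~", round(exponent, 3))

Closer exponents would suggest that the generated text tracks the human type–token trend more closely; the paper's framework tests this kind of fit, for several tendencies, with significance tests.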


Hits by source: Catalogues 1; Bibliographies 0; Linked Open Data catalogues 0; Online resources 0; Open access documents 162.